Human video motion transfer (HVMT) aims to, given an image of a source person, generate a video that imitates the motion of a driving person. Existing HVMT methods mainly exploit generative adversarial networks (GANs) to perform a warping operation based on the flow estimated from the source person image and each frame of the driving video. However, these methods always generate obvious artifacts due to the dramatic differences in pose and scale between the source person and the driving person. To overcome these challenges, this paper proposes a novel GAN-based human motion transfer framework, named REMOT. To generate realistic motions, REMOT adopts a progressive generation paradigm: it first generates each body part without flow-based warping, and then composites all the parts into a complete person in the driving motion. Moreover, to preserve a natural global appearance, we design a global alignment module that aligns the scale and position of the source person with those of the driving person according to their layouts. Furthermore, we propose a texture alignment module that aligns each part of the person according to texture similarity. Finally, through extensive quantitative and qualitative experiments, our REMOT achieves state-of-the-art results on two public benchmarks.
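To make the progressive part-to-whole paradigm concrete, here is a minimal PyTorch sketch; the part list, module shapes, and fusion scheme are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of the progressive part-to-whole paradigm. All shapes,
# the part list, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

PARTS = ["head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]

class PartGenerator(nn.Module):
    """Generates one body part from source appearance plus driving pose."""
    def __init__(self, in_ch=6, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, src_part, driving_pose):
        # Stage 1: no flow-based warping; the part is synthesized directly.
        return self.net(torch.cat([src_part, driving_pose], dim=1))

class Compositor(nn.Module):
    """Stage 2: fuses per-part outputs into a full person in the driving motion."""
    def __init__(self, n_parts, out_ch=3):
        super().__init__()
        self.fuse = nn.Conv2d(n_parts * out_ch, out_ch, 1)

    def forward(self, parts):
        return torch.tanh(self.fuse(torch.cat(parts, dim=1)))

part_gens = nn.ModuleDict({p: PartGenerator() for p in PARTS})
compositor = Compositor(len(PARTS))

src_parts = {p: torch.randn(1, 3, 64, 64) for p in PARTS}  # cropped source parts
pose = torch.randn(1, 3, 64, 64)                           # driving pose map
parts = [part_gens[p](src_parts[p], pose) for p in PARTS]
full_person = compositor(parts)                            # (1, 3, 64, 64)
```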
Utilizing trimap guidance and fusing multi-level features are two important issues for trimap-based matting with pixel-level prediction. To utilize trimap guidance, most existing approaches simply concatenate trimaps and images together to feed a deep network, or apply an extra network to extract more trimap guidance, which meets the conflict between efficiency and effectiveness. For the emerging content-based feature fusion, most existing matting methods focus only on local features, which lack the guidance of global features carrying strong semantic information related to the interesting object. In this paper, we propose a trimap-guided feature mining and fusion network, consisting of our trimap-guided non-background multi-scale pooling (TMP) module and global-local context-aware fusion (GLF) module. Considering that trimaps provide strong semantic guidance, our TMP module conducts efficient feature mining on the interesting objects under the guidance of trimaps without extra parameters. Furthermore, our GLF module uses the global semantic information of the interesting objects mined by our TMP module to guide an effective global-local context-aware multi-level feature fusion. In addition, we build a common interesting object matting (CIOM) dataset to advance high-quality image matting. Experimental results on the Composition-1k test set, the Alphamatting benchmark, and our CIOM test set demonstrate that our method outperforms state-of-the-art approaches. Code and models will be publicly released soon.
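As an illustration of how trimap guidance can drive parameter-free feature mining in the spirit of the TMP module, here is a minimal sketch of masked multi-scale pooling over non-background pixels; the scales and the masked-average formulation are assumptions, not the paper's exact design.

```python
# Trimap-guided, parameter-free feature mining (sketch): features are pooled
# only over non-background pixels, as indicated by the trimap, at several
# scales. The scales and the masked-average rule are assumptions.
import torch
import torch.nn.functional as F

def trimap_guided_pool(feat, trimap, scales=(1, 2, 4)):
    """feat: (B, C, H, W); trimap: (B, 1, H, W) with 0 = background."""
    mask = (trimap > 0).float()                    # foreground + unknown region
    outs = []
    for s in scales:
        f = F.avg_pool2d(feat * mask, s)           # sum of valid features / s^2
        m = F.avg_pool2d(mask, s).clamp(min=1e-6)  # fraction of valid pixels
        p = f / m                                  # mean over non-background only
        outs.append(F.interpolate(p, size=feat.shape[-2:], mode="bilinear",
                                  align_corners=False))
    return torch.cat(outs, dim=1)                  # no learnable parameters

feat = torch.randn(1, 64, 32, 32)
trimap = torch.randint(0, 3, (1, 1, 32, 32)).float()  # 0 bg, 1 unknown, 2 fg
mined = trimap_guided_pool(feat, trimap)              # (1, 192, 32, 32)
```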
Recently, template-based trackers have become the leading tracking algorithms, with promising performance in terms of both efficiency and accuracy. However, the correlation operation between the query features and the given template only exploits accurate target localization, leading to state estimation errors, especially when the target suffers from severe deformable variations. To address this issue, segmentation-based trackers have been proposed, which employ per-pixel matching to effectively improve the tracking performance for deformable objects. However, most existing trackers refer only to the target features in the initial frame, and thereby lack the discriminative capacity to handle challenging factors such as similar distractors, background clutter, and appearance changes. To this end, we propose a dynamic compact memory embedding to enhance the discrimination of segmentation-based deformable visual tracking. Specifically, we initialize a memory embedding with the target features in the first frame. During the tracking process, the current target features that have high correlation with the existing memory are updated into the memory embedding online. To further improve the segmentation accuracy for deformable objects, we employ a point-to-set matching strategy to measure the correlation between the pixel-wise query features and the whole template, so as to capture more detailed deformation information. Extensive evaluations on six challenging tracking benchmarks, including VOT2016, VOT2018, VOT2019, GOT-10k, TrackingNet, and LaSOT, demonstrate the superiority of our method over recent advanced trackers. In addition, our method outperforms excellent segmentation-based trackers on the DAVIS2017 benchmark.
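The two mechanisms described above can be sketched as follows: a correlation-gated online memory update, and point-to-set matching between pixel-wise query features and the whole template. The threshold, momentum, and update rule are assumptions rather than the paper's exact formulation.

```python
# Sketch of (1) an online memory update that only absorbs current target
# features highly correlated with the existing memory, and (2) point-to-set
# matching of each query pixel against the whole template feature set.
import torch
import torch.nn.functional as F

def update_memory(memory, target_feat, thresh=0.7, momentum=0.9):
    """memory: (N, C) embeddings; target_feat: (M, C) current-frame features."""
    corr = F.normalize(target_feat, dim=1) @ F.normalize(memory, dim=1).t()
    best, idx = corr.max(dim=1)                  # best-matching memory slot
    for i in torch.nonzero(best > thresh).flatten():
        j = idx[i]                               # only highly correlated features
        memory[j] = momentum * memory[j] + (1 - momentum) * target_feat[i]
    return memory

def point_to_set_match(query, template_set):
    """query: (C, H, W) pixel features; template_set: (K, C) whole template.
    Returns an (H, W) map: each query pixel scored against the entire set."""
    C, H, W = query.shape
    q = F.normalize(query.flatten(1).t(), dim=1)      # (H*W, C)
    t = F.normalize(template_set, dim=1)              # (K, C)
    return (q @ t.t()).max(dim=1).values.view(H, W)   # best match per pixel

memory = torch.randn(16, 128)
memory = update_memory(memory, torch.randn(8, 128))
sim_map = point_to_set_match(torch.randn(128, 24, 24), memory)
```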
Crowd counting aims to understand the crowd density distribution and estimate the number of objects (e.g., persons) in an image. The perspective effect, which significantly influences the distribution of data points, plays an important role in crowd counting. In this paper, we propose a novel perspective-aware approach, called PANet, to address the perspective problem. Based on the observation that the size of objects varies greatly within one image due to the perspective effect, we propose the dynamic receptive fields (DRF) framework. The framework is able to adjust the receptive field via the dilated convolution parameters according to the input image, which helps the model extract more discriminative features for each local region. Different from most previous works, which use Gaussian kernels to generate density maps as the supervision information, we propose the self-distilling supervision (SDS) training method. The ground-truth density maps are refined in the first training stage, and the perspective information is distilled to the model in the second stage. Experimental results on the ShanghaiTech Part_A and Part_B, UCF_QNRF, and UCF_CC_50 datasets demonstrate that our proposed PANet outperforms state-of-the-art methods.
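One plausible realization of the dynamic-receptive-field idea is to mix several dilated branches with input-dependent weights, so the effective receptive field adapts per image; the branch dilations and gating design in this sketch are assumptions.

```python
# Dynamic receptive field (sketch): dilated branches share the input, and a
# lightweight gate predicts per-image mixing weights, adapting the effective
# receptive field to the input. Dilations and gating design are assumptions.
import torch
import torch.nn as nn

class DynamicReceptiveField(nn.Module):
    def __init__(self, ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.gate = nn.Sequential(              # one weight per dilated branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, len(dilations)), nn.Softmax(dim=1),
        )

    def forward(self, x):
        w = self.gate(x)                                          # (B, n)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, n, C, H, W)
        return (w[:, :, None, None, None] * outs).sum(dim=1)

drf = DynamicReceptiveField(32)
y = drf(torch.randn(2, 32, 64, 64))  # receptive field adapted per input image
```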
The detection of the human body and its related parts (e.g., face, head, or hands) has been intensively studied and greatly improved since the breakthrough of deep CNNs. However, most of these detectors are trained independently, making it a challenging task to associate detected body parts with people. This paper focuses on the problem of joint detection of the human body and its corresponding parts. Specifically, we propose a novel extended object representation that integrates the center location offsets of the body or its parts, and construct a dense single-stage anchor-based Body-Part Joint Detector (BPJDet). Body-part associations in BPJDet are embedded into the unified representation, which contains both semantic and geometric information. Therefore, BPJDet does not suffer from error-prone association post-matching and achieves a better accuracy-speed trade-off. Furthermore, BPJDet can be seamlessly generalized to jointly detect any body part. To verify the effectiveness and superiority of our method, we conduct extensive experiments on the CityPersons, CrowdHuman, and BodyHands datasets. The proposed BPJDet detector achieves state-of-the-art association performance on these three benchmarks while maintaining high detection accuracy. Code will be released to facilitate further studies.
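To illustrate how the extended representation embeds associations, here is a minimal sketch of decoding one body prediction that carries its own box plus center offsets to its parts; the output layout (4 box values + 2 offsets per part) is an assumption.

```python
# Extended object representation (sketch): each body prediction carries its
# box plus center offsets to its parts, so the association is read directly
# from the regression output instead of being post-matched.
import numpy as np

def decode_body_part(pred, n_parts=1):
    """pred: [cx, cy, w, h, dx1, dy1, ...] for one body in image coordinates."""
    cx, cy, w, h = pred[:4]
    body_box = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
    part_centers = [(cx + pred[4 + 2 * i], cy + pred[5 + 2 * i])
                    for i in range(n_parts)]  # offsets relative to body center
    return body_box, part_centers

pred = np.array([100.0, 120.0, 40.0, 160.0, 2.0, -60.0])  # one body, one head
box, heads = decode_body_part(pred)  # association comes with the prediction
```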
Most recent head pose estimation (HPE) methods are dominated by the Euler angle representation. To avoid its inherent ambiguity problem with rotation labels, alternative quaternion-based and vector-based representations have been introduced. However, neither is visually intuitive, and both are often derived from equivocal Euler angle labels. In this paper, we present a novel single-stage keypoint-based method via an intuitive and unconstrained 2D cube representation for joint head detection and pose estimation. The 2D cube is an orthogonal projection of the 3D regular hexahedron label roughly surrounding one head, and itself contains the head location. It can reflect the head orientation straightforwardly and unambiguously at any rotation angle. Unlike general 6-DoF object pose estimation, our 2D cube ignores the 3-DoF of head size but retains the 3-DoF of head pose. Based on the prior of equal side length, we can effortlessly obtain the closed-form solution of Euler angles from the predicted 2D head cube instead of applying the error-prone PnP algorithm. In experiments, our proposed method achieves results comparable to other representative methods on the public AFLW2000 and BIWI datasets. Besides, a novel test on the CMU Panoptic dataset shows that our method can be seamlessly adapted to the unconstrained full-view HPE task without modification.
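The closed-form recovery can be illustrated under an assumed orthographic model: the 2D projections of the cube's three orthogonal unit edges equal the top two rows of the rotation matrix up to one common scale (the equal-side-length prior), the third row follows by a cross product, and Euler angles follow in closed form. The projection model and the ZYX angle convention in this sketch are assumptions.

```python
# Closed-form Euler angles from a projected cube (sketch, orthographic model).
import numpy as np

def euler_from_projected_cube(P):
    """P: 2x3 matrix, columns = 2D projections of the three cube edge axes,
    i.e. P = s * R[:2, :] for head rotation R and unknown scale s."""
    s = np.linalg.norm(P[0])          # rows of R are unit vectors, so s = |row|
    R2 = P / s
    r3 = np.cross(R2[0], R2[1])       # orthonormality supplies the third row
    R = np.vstack([R2, r3])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([yaw, pitch, roll])

# Round-trip check with a known rotation (yaw 30, pitch 10, roll -20 degrees):
a, b, c = np.radians([30.0, 10.0, -20.0])
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
Rx = np.array([[1, 0, 0], [0, np.cos(c), -np.sin(c)], [0, np.sin(c), np.cos(c)]])
print(euler_from_projected_cube(1.7 * (Rz @ Ry @ Rx)[:2, :]))  # ~[30, 10, -20]
```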
Scene text editing (STE) aims to replace text with the desired one while preserving the background and styles of the original text. However, due to complicated background textures and various text styles, existing methods fall short in generating clear and legible edited text images. In this study, we attribute the poor editing performance to two problems: 1) Implicit decoupling structure. Previous methods of editing the whole image have to learn different translation rules for background and text regions simultaneously. 2) Domain gap. Due to the lack of edited real scene text images, the network can only be well trained on synthetic pairs and performs poorly on real-world images. To handle the above problems, we propose a novel network that MOdifies Scene Text images at strokE Level (MOSTEL). Firstly, we generate stroke guidance maps to explicitly indicate the regions to be edited. Different from the implicit approach of directly modifying all pixels at the image level, such explicit instructions filter out distractions from the background and guide the network to focus on the editing rules of text regions. Secondly, we propose a Semi-supervised Hybrid Learning scheme to train the network with both labeled synthetic images and unpaired real scene text images. Thus, the STE model is adapted to real-world data distributions. Moreover, two new datasets (Tamper-Syn2k and Tamper-Scene) are proposed to fill the gap in public evaluation datasets. Extensive experiments demonstrate that our MOSTEL outperforms previous methods both qualitatively and quantitatively. Datasets and code will be available at https://github.com/qqqyd/MOSTEL.
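As a minimal illustration of explicit stroke-level guidance, the sketch below composes the generator output with the source image so that only stroke pixels are edited; this composition rule is an assumption, not the paper's exact formulation.

```python
# Explicit stroke guidance (sketch): only pixels flagged by the stroke map are
# replaced by the generator output, so background regions pass through.
import torch

def compose_with_stroke_guidance(src_img, gen_img, stroke_map):
    """src_img, gen_img: (B, 3, H, W); stroke_map: (B, 1, H, W), 1 = stroke."""
    return stroke_map * gen_img + (1.0 - stroke_map) * src_img

src = torch.rand(1, 3, 64, 256)                       # original scene text image
gen = torch.rand(1, 3, 64, 256)                       # output with the new text
strokes = (torch.rand(1, 1, 64, 256) > 0.9).float()   # predicted stroke map
edited = compose_with_stroke_guidance(src, gen, strokes)
```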
As an effective method to deliver external materials into biological cells, microinjection has been widely applied in the biomedical field. However, the understanding of cell mechanical properties is still inadequate, which greatly limits the efficiency and success rate of injection. Thus, a new rate-dependent mechanical model based on membrane theory is proposed for the first time. In this model, an analytical equilibrium equation between the injection force and cell deformation is established by considering the speed effect of microinjection. Different from the traditional membrane-theory-based model, the elastic coefficient of the constitutive material in the proposed model is modified as a function of the injection velocity and acceleration, effectively simulating the influence of speed on the mechanical responses and providing a more generalized and practical model. Using this model, other mechanical responses at different speeds can also be accurately predicted, including the distribution of membrane tension and stress and the deformed shape. To verify the validity of the model, numerical simulations and experiments are carried out. The results show that the proposed model matches the real mechanical responses well at different injection speeds.
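A toy numerical sketch of the rate-dependent idea follows: the elastic coefficient is treated as a function of injection velocity and acceleration instead of a constant. The linear functional form and every coefficient below are purely hypothetical placeholders, not the paper's fitted model.

```python
# Rate-dependent stiffness (toy sketch): the elastic coefficient depends on
# injection velocity and acceleration. The linear form is hypothetical.
def elastic_coefficient(E0, k_v, k_a, velocity, acceleration):
    """E0: quasi-static modulus; k_v, k_a: hypothetical rate sensitivities."""
    return E0 * (1.0 + k_v * velocity + k_a * acceleration)

def injection_force(E0, k_v, k_a, velocity, acceleration, deformation, k_geom=1.0):
    """Toy equilibrium: force scales with deformation through the
    rate-dependent coefficient; k_geom stands in for membrane geometry terms."""
    return k_geom * elastic_coefficient(E0, k_v, k_a, velocity, acceleration) * deformation

# Faster injection -> stiffer apparent response -> larger force at equal depth.
slow = injection_force(1.0, 0.5, 0.1, velocity=1.0, acceleration=0.0, deformation=2.0)
fast = injection_force(1.0, 0.5, 0.1, velocity=5.0, acceleration=0.0, deformation=2.0)
assert fast > slow
```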
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than relying on pure visual classification. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. Correspondingly, we propose an autonomous, bidirectional, and iterative ABINet++ for scene text spotting. Firstly, the autonomous design enforces explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two models. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Thirdly, we propose an iterative correction scheme for the language model, which effectively alleviates the impact of noisy input. Finally, to polish ABINet++ for long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module that integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments in both English and Chinese also prove that a text spotter incorporating our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
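Two of the mechanisms named above are directly expressible in code: blocking gradient flow between the vision and language models (explicit decoupling), and running the language model iteratively so each pass corrects noise in its own input. The module internals and iteration count in this sketch are assumptions.

```python
# Decoupled vision/language recognition with iterative correction (sketch).
import torch
import torch.nn as nn

class DecoupledRecognizer(nn.Module):
    def __init__(self, vision_model, language_model, n_iters=3):
        super().__init__()
        self.vm, self.lm, self.n_iters = vision_model, language_model, n_iters

    def forward(self, image):
        logits = self.vm(image)                   # visual prediction
        for _ in range(self.n_iters):             # iterative correction
            # detach() blocks gradients from the LM into its input, forcing
            # the LM to learn linguistic rules rather than visual shortcuts.
            logits = self.lm(logits.detach().softmax(dim=-1))
        return logits

# Toy instantiation: 10 character slots, a 26-class alphabet.
vm = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 260), nn.Unflatten(1, (10, 26)))
lm = nn.Linear(26, 26)
out = DecoupledRecognizer(vm, lm)(torch.randn(2, 1, 28, 28))  # (2, 10, 26)
```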
Each student matters, but it is hardly possible for instructors to observe all students during a course and immediately help those in need. In this paper, we present StuArt, a novel automatic system designed for individualized classroom observation, which enables instructors to follow the learning status of each student. StuArt can recognize five representative student behaviors (hand-raising, standing, sleeping, yawning, and smiling) that are highly related to engagement, and track their variation trends during the course. To protect student privacy, all variation trends are indexed by seat numbers without any personal identification information. Furthermore, StuArt adopts various user-friendly visualization designs to help instructors quickly understand both individual and whole-class learning status. Experimental results on real classroom videos demonstrate the superiority and robustness of the embedded algorithms. We expect our system to promote the development of large-scale individualized guidance for students.
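A minimal sketch of the privacy-preserving bookkeeping described above: per-frame behavior detections are accumulated against seat numbers only, with no personal identifiers stored. The behavior names follow the abstract; the data layout is an assumption.

```python
# Seat-indexed behavior trends (sketch): no personal identifiers are stored.
from collections import defaultdict

BEHAVIORS = ("hand-raising", "standing", "sleeping", "yawning", "smiling")

class SeatTrends:
    def __init__(self):
        # seat number -> behavior -> list of (timestamp, detected) samples
        self.trends = defaultdict(lambda: {b: [] for b in BEHAVIORS})

    def record(self, seat_no, timestamp, detections):
        """detections: set of behavior names seen for this seat at this time."""
        for b in BEHAVIORS:
            self.trends[seat_no][b].append((timestamp, b in detections))

    def trend(self, seat_no, behavior):
        return [flag for _, flag in self.trends[seat_no][behavior]]

log = SeatTrends()
log.record(seat_no=12, timestamp=301.5, detections={"hand-raising"})
print(log.trend(12, "hand-raising"))  # [True]
```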